Non-attracting Regions of Local Minima in Deep and Wide Neural Networks
Understanding the loss surface of neural networks is essential for the design
of models with predictable performance and their success in applications.
Experimental results suggest that sufficiently deep and wide neural networks
are not negatively impacted by suboptimal local minima. Despite recent
progress, the reason for this outcome is not fully understood. Could deep
networks have very few, if any, suboptimal local minima? Or could all of
them be equally good? We provide a construction to show that suboptimal local
minima (i.e., non-global ones), even though degenerate, exist for fully
connected neural networks with sigmoid activation functions. The local minima
obtained by our construction belong to a connected set of local solutions that
can be escaped from via a non-increasing path on the loss curve. For extremely
wide neural networks of decreasing width after the wide layer, we prove that
every suboptimal local minimum belongs to such a connected set. This provides a
partial explanation for the successful application of deep neural networks. In
addition, we characterize under what conditions the same construction
leads to saddle points instead of local minima for deep neural networks.
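The abstract's claims are proved analytically, but the notion of a non-increasing escape path out of a degenerate critical point is easy to probe numerically. The following is a minimal sketch under assumptions of our own choosing (a hypothetical 1-5-1 sigmoid network on toy regression data and a zero-weight candidate point, not the paper's construction): it searches for nearby descent directions and, if one is found, follows a path along which the loss never increases.

```python
# Illustrative sketch only: numerically probing a candidate stationary
# point of a small fully connected sigmoid network. The architecture,
# data, and candidate point are hypothetical stand-ins, not the
# construction from the paper.
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data.
X = np.linspace(-2.0, 2.0, 50).reshape(-1, 1)
y = np.sin(X)

WIDTH = 5  # hidden units: network is 1 -> WIDTH (sigmoid) -> 1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(theta):
    # Unpack a flat parameter vector into layer weights and biases.
    W1 = theta[:WIDTH].reshape(1, WIDTH)
    b1 = theta[WIDTH:2 * WIDTH]
    W2 = theta[2 * WIDTH:3 * WIDTH].reshape(WIDTH, 1)
    b2 = theta[-1]
    pred = sigmoid(X @ W1 + b1) @ W2 + b2
    return float(np.mean((pred - y) ** 2))

def find_descent_direction(theta, eps=1e-2, trials=2000):
    # Random probing: return a unit direction that lowers the loss at
    # step size eps, or None if no such direction is found (so the
    # point behaves like a local minimum at this resolution).
    base = loss(theta)
    for _ in range(trials):
        d = rng.normal(size=theta.shape)
        d /= np.linalg.norm(d)
        if loss(theta + eps * d) < base - 1e-10:
            return d
    return None

def follow_nonincreasing(theta, d, eps=1e-2, steps=200):
    # Greedily walk along d, accepting only steps on which the loss
    # does not increase -- a crude analogue of the escape paths
    # described in the abstract.
    p, history = theta.copy(), [loss(theta)]
    for _ in range(steps):
        q = p + eps * d
        if loss(q) <= history[-1] + 1e-12:
            p, history = q, history + [loss(q)]
    return p, history

theta0 = np.zeros(3 * WIDTH + 1)  # candidate: all weights and biases zero
d = find_descent_direction(theta0)
if d is None:
    print("no descent direction found: behaves like a local minimum")
else:
    _, hist = follow_nonincreasing(theta0, d)
    print(f"escaped along a non-increasing path: {hist[0]:.4f} -> {hist[-1]:.4f}")
```

Whether such a probe reports a flat minimum or an escape direction depends on the candidate point; for the zero-weight candidate above it typically finds a descent direction, echoing the abstract's remark that the same construction can yield saddle points.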
Corrigendum to “Regularity for Stably Projectionless, Simple C*-Algebras”
On the regularization of Wasserstein GANs
Since their invention, generative adversarial networks (GANs) have become a
popular approach for learning to model a distribution of real (unlabeled) data.
Convergence problems during training are overcome by Wasserstein GANs, which
minimize the distance between the model and the empirical distribution in terms
of a different metric but thereby introduce a Lipschitz constraint into the
optimization problem. A simple way to enforce the Lipschitz constraint on the
class of functions that can be modeled by the neural network is weight
clipping. It was proposed that training can be improved by instead augmenting
the loss by a regularization term that penalizes the deviation of the gradient
of the critic (as a function of the network's input) from one. We present
theoretical arguments for why a weaker regularization term, which still
enforces the Lipschitz constraint, is preferable. These arguments are supported by
experimental results on toy data sets. Published as a conference paper at ICLR
2018; Henning Petzka and Asja Fischer contributed equally to this work
(11 pages + 13 pages appendix).
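To make the contrast concrete, here is a minimal PyTorch sketch of the two penalty variants: the two-sided term that forces the critic's input-gradient norm to one, and the weaker one-sided term that only penalizes norms above one. The critic, the interpolation scheme, and the coefficient lam are illustrative assumptions, not the authors' exact setup.

```python
# Sketch of the two gradient penalties discussed above, in PyTorch.
import torch

def gradient_penalty(critic, real, fake, one_sided=True, lam=10.0):
    """Penalty on the critic's input gradient at points interpolated
    between real and fake samples.

    one_sided=False: penalizes any deviation of the gradient norm
    from 1 (the two-sided penalty the abstract critiques).
    one_sided=True: the weaker variant, penalizing only norms above 1,
    which enforces the Lipschitz constraint without forcing the
    gradient norm to equal 1 everywhere.
    """
    # Random points on line segments between real and fake samples.
    alpha = torch.rand(real.size(0), 1, device=real.device)
    x = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    # Gradient of the critic's output with respect to its input.
    grad, = torch.autograd.grad(critic(x).sum(), x, create_graph=True)
    norm = grad.norm(2, dim=1)
    if one_sided:
        penalty = torch.clamp(norm - 1.0, min=0.0) ** 2
    else:
        penalty = (norm - 1.0) ** 2
    return lam * penalty.mean()
```

With one_sided=True this corresponds to the weaker penalty argued for in the abstract; with one_sided=False it reduces to the familiar two-sided gradient penalty.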
Comparison properties of the Cuntz semigroup and applications to C*-algebras
We study comparison properties in the category Cu aiming to lift results to the C*-algebraic setting. We introduce a new comparison property and relate it to both the corona factorization property (CFP) and ω-comparison. We exhibit the differences between all of these properties by providing examples, which suggest that the corona factorization property for C*-algebras might allow for both finite and infinite projections. In addition, we show that Rørdam's simple, nuclear C*-algebra with a finite and an infinite projection does not have the CFP.
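As background (standard material, not specific to this paper), the comparison relation underlying the category Cu is Cuntz subequivalence of positive elements; the following fixes the notation:

```latex
% Standard definition, stated only to fix notation: for positive
% elements a, b in a C*-algebra A, a is Cuntz subequivalent to b
% when a can be approximately compressed into b.
\[
  a \precsim b
  \quad\Longleftrightarrow\quad
  \exists\, (r_n)_{n \in \mathbb{N}} \subseteq A :\;
  \lVert r_n b\, r_n^{*} - a \rVert \xrightarrow[n \to \infty]{} 0 .
\]
% The Cuntz semigroup Cu(A) is built from the positive elements of
% A \otimes \mathcal{K}, modulo the equivalence generated by mutual
% subequivalence; the comparison properties above live in this category.
```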
Corrigendum to “Regularity for stably projectionless, simple C*-algebras” [J. Funct. Anal. 263 (2012) 1382–1407]
Research partially supported by EPSRC (grant no. EP/N002377/1), NSERC (PDF, held by AT), and by the DFG (SFB 878).